JMIRx Med
JMIR Publications Inc.
Preprints posted in the last 30 days, ranked by how well they match JMIRx Med's content profile, based on 31 papers previously published here. The average preprint has a 0.06% match score for this journal, so anything above that is already an above-average fit.
Chowdhury, A.; Irtiza, A.
Background: The urgent care departments in Europe face a structural paradox: accelerating digitalisation is accompanied by a patient population that is disproportionately unable to engage with standard digital tools. An internal analysis at the Emergency Department (Akutafdelingen) of Nordsjaellands Hospital in Hilleroed, Denmark found that 43% of emergency patients struggle with digital solutions - a figure that reflects the predictable composition of acute care populations rather than any individual failing. Objective: This paper presents the design, iterative development, and secondary validation of the ED Adaptive Interface (v5): a prototype adaptive patient terminal developed in response to this challenge. The system operationalises what the author terms impairment-first design - a methodology that treats the most constrained patient experience as the primary design problem and derives the standard experience as a subset. The interface configures itself in under ten seconds via nurse-led setup, adapting across four axes of impairment: visual, motor, speech, and cognitive. System: Version 4 supports five accessibility modes, a heatmap pain assessment grid, a Privacy and Dignity panel, a live workflow tracker with care notifications, structured dual-category help requests, and plain-language medical term definitions across four languages. Version 5, reported here for the first time, introduces a Condition Worsening Escalation button, a Referral Pathway Display, a "Why Am I Waiting?" triage explainer, a Symptom Progression Log, MinSP/Yellow Card Scan simulation, expanded language support (seven languages: English, Danish, Arabic with full RTL layout, Turkish, Romanian, Polish, and Somali), and an expanded ten-item Communication Board. The entire system runs as a single 79-kilobyte HTML file with zero infrastructure requirements. 
Methods: To base the design on patient-generated evidence, two independent social media threads were subjected to an inductive thematic analysis (Braun and Clarke, 2006): a primary corpus of 83 entries in the Facebook group Foreigners in Denmark (collected March 2026) and a corroborating corpus in an international community group in the Aarhus region (collected April 2026). All identifiers in both datasets were fully anonymised under GDPR Article 89 research provisions prior to analysis. No participants were contacted. Generative AI tools were used to assist with drafting, writing, and prototype code development; all scientific content, data collection, analysis, and conclusions are the sole responsibility of the authors. Results: The first discourse corpus produced five major themes corresponding to the five problem areas the prototype was designed to address: system navigation and triage literacy gaps (31 entries); language and cultural barriers (6 entries); communication failures during care (5 entries); staff overload and capacity constraints (8 entries); and pain and severity assessment failures (14 entries). The corroborating dataset supported all five themes and introduced two additional themes: differential treatment of international patients and medical gaslighting as a long-term pattern of patient advocacy failure. One structural finding - the five most-liked comments incorrectly criticised the original poster for self-referring when she had received explicit 1813 telephone triage approval - directly inspired the Referral Pathway Display and "Why Am I Waiting?" features in v5. Conclusions: The convergence of design rationale and independent social evidence across all five problem categories suggests that impairment-first design is not a niche accessibility concern but a structural approach to healthcare interface quality. The prototype is ready for a structured clinical pilot using the System Usability Scale (SUS) and semi-structured staff interviews. 
The long-term roadmap includes full MinSP integration, hospital PMS connectivity, and clinical validation.
da Luz, C. C.; Sorbello, C. C. J.; Epifanio, E. A.; dos Santos, C. d. A.; Brandi, S.; Guerra, J. C. d. C.; Wolosker, N.
Background: Vascular access is essential in treating patients undergoing prolonged endovenous therapy such as chemotherapy, antibiotics, and parenteral nutrition. Since the 1990s, when PICCs (peripherally inserted central catheters) appeared, vascular access options have expanded significantly, revolutionizing the treatment landscape for all types of patients. Objective: To analyze and describe the profile of PICC use in a Brazilian quaternary hospital over 10 years, with data collected by the infusion therapy team. We evaluated the number of PICCs implanted over the years, patients' demographic and clinical characteristics, insertion details, associated complications, and the reason for removal. Methods: This was a retrospective cohort study employing a quantitative, non-experimental approach to classify and statistically analyze past events associated with 21,652 PICCs implanted from January 2012 to December 2021 in a quaternary hospital in Sao Paulo, Brazil. All catheters were implanted, and all data collected, by a team of nurses specializing in infusion therapy. We analyzed the number of catheters implanted over the years, insertion characteristics, patients' demographic and clinical data, possible associated complications, and the reason for removal. Statistical analyses were conducted using R software (version 4.4.1) and SPSS (version 29) for Windows (IBM Corp, Armonk, NY). Results: During the specified period, 21,652 catheters were analyzed. The patients' gender distribution was nearly balanced (48.2% versus 51.8%), and the average age was 66 years. Cardiovascular and metabolic issues were the most common comorbidities, and between 2020 and 2021, 29.3% of the sample tested positive for COVID-19. The most common location of hospitalization and implantation was the medical-surgical clinic (31.6% - 41.4%), and the most used type of catheter was the Power PICC (83.9%). 
The estimated complication incidence density was 2.94 complications per 1,000 catheter-days. Almost all PICCs (98.2%) were adequately located at the cavo-atrial junction after the first attempt, 82.2% of catheters were removed after completion of therapy, and the median duration of catheter use was 12 days. Conclusion: PICCs are widely employed for drug infusion, with their use growing progressively due to the greater availability and training of specialized teams. The high efficiency of these devices, with a relatively low risk of complications already observed in previous studies, was reinforced by the findings of this study of more than 20,000 catheters.
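As a minimal sketch of the incidence-density metric used in the abstract above: the rate is simply the number of complications divided by total catheter-days, scaled to 1,000. The counts below are hypothetical illustrations chosen to reproduce the reported rate of 2.94 per 1,000 catheter-days; the paper itself reports only the rate and the median dwell time.

```python
def incidence_density(n_complications: int, catheter_days: int, per: int = 1000) -> float:
    """Complication incidence density per `per` catheter-days."""
    return n_complications / catheter_days * per

# Hypothetical: if the 21,652 catheters had each contributed the median
# 12 days (259,824 catheter-days) and 764 complications were recorded,
# the density would match the reported 2.94 per 1,000 catheter-days.
print(round(incidence_density(764, 21_652 * 12), 2))  # → 2.94
```

Denominating by catheter-days rather than by catheter count is what makes rates comparable across cohorts with very different dwell times.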
Sathe, S. S.; Porter, N.; Miller, C.; Rockwell, M.
Background: People with disabilities use technology, like search engines, to seek health information online. This health information includes information on coronavirus disease, or COVID-19. COVID-19 remains a public health concern. Research shows that people with disabilities encounter frustrations, or "pain points," when seeking online information, but little is known about these specific pain points and who encounters them. Objective: The goals of this study are to determine pain points for people with disabilities who seek health information online, and to assess how pain points impact the experience of technology use and information seeking. Methods: Ten participants recruited from a prior quantitative survey completed the concurrent think-aloud study over a month-long period. Participants completed four online search tasks and narrated their experiences in real-time while doing so. Transcripts were stored in Taguette; thematic analysis was performed on these transcripts. Findings: Participants were predominantly white, with three identifying as Asian. All ten participants reported having disabilities. Participants with attention deficit hyperactivity disorder (ADHD) reported distracting webpage layout, whereas participants with physical disabilities reported physical fatigue while navigating online information. All participants encountered AI-generated information; only one participant indicated trust in the AI-generated information. Other common sources of information included hospital and governmental webpages, peer-reviewed articles, and news and advertising results. News and advertising results were especially common with respect to search results for "COVID-19 vaccine." Themes identified included the following: accessibility/usability, AI-generated information, government/hospital and related sources of information, peer-reviewed articles, news and advertising, and sentiment and trust. 
Conclusions: Information can be fatiguing, distracting, or otherwise difficult to navigate for people with diverse disabilities searching for COVID-19-related information online. Further work should incorporate user feedback from people with disabilities when designing online content.
Sparnon, E.; Stevens, K.; Song, E.; Harris, R. J.; Strong, B. W.; Bruno, M. A.; Baird, G. L.
The present study evaluates the real-world clinical predictive performance of FDA-authorized artificial intelligence (AI) devices used in radiology, focusing on the false positive paradox (FPP) and its implications for clinical practice. To do this, we analyzed publicly available FDA data on AI radiology devices from 2024 and 2025 from 510(k) summaries, demonstrating how diagnostic accuracy metrics like sensitivity and specificity do not necessarily translate into high positive predictive value (PPV) due to the influence of target disease prevalence. We show the importance of disclosing the false discovery rate (FDR) and false omission rate (FOR) and argue that this transparency enables clinicians to select AI systems that balance false positive and false negative costs in a clinically, ethically, and financially appropriate manner. Finally, we provide recommendations for what data should be provided to best serve practices and radiologists.
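The prevalence effect described in this abstract follows directly from Bayes' rule. A brief sketch with illustrative numbers (not taken from the paper): even a device with seemingly strong sensitivity and specificity yields a low PPV when the target disease is rare, which is the false positive paradox.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    P(disease | positive) = TP rate / (TP rate + FP rate)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative: 95% sensitivity and 95% specificity at 1% prevalence
# give a PPV of only about 16% -- most positive flags are false,
# and the false discovery rate (FDR = 1 - PPV) is about 84%.
print(round(ppv(0.95, 0.95, 0.01), 3))  # → 0.161
```

The same device at 50% prevalence has a PPV of 0.95, which is why summary metrics reported without a prevalence context can mislead purchasers and clinicians.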
Salim, A.; Allen, M.; Mariki, K.; Pallangyo, T.; Maina, R.; Mzee, F.; Minja, M.; Msovela, K.; Liana, J.
In the context of global health, the ability of frontline primary health providers to identify potential Drug-Drug Interactions (DDIs) is a critical component of patient safety. This is particularly true in settings like Tanzania, where drug dispensers often serve as the primary point of contact for healthcare. In this study, we establish a baseline for drug decision-making capabilities across multiple cadres of healthcare providers in Kibaha, Tanzania. We specifically distinguish between the ability to recognize safe drug combinations versus harmful ones. The findings reveal a critical asymmetry in provider performance: while professional training improves the recognition of safe combinations, it provides no advantage over lay intuition (and in some cases, a significant disadvantage) in detecting potentially harmful interactions.
Rai, K.; Bianchina, N.; Fischer, C.; Clawson, J.; McBeth, L.; Gottenborg, E.; Keniston, A.; Burden, M.
Purpose: High clinical workload is associated with worse patient and hospital outcomes and is a well-established driver of clinician burnout. Trainees may be particularly exposed, shouldering both clinical and educational responsibilities. Evidence-based work design offers a data-driven approach to healthcare work but relies on robust workload measurements. Trainee workload remains poorly characterized, as commonly used metrics (e.g., duty hours, patient census) overlook cognitive and contextual dimensions. This pilot evaluated the feasibility of combining survey-based and electronic health record (EHR) data to characterize internal medicine (IM) trainee workload. Methods: A pilot study was conducted including IM and Medicine-Pediatrics residents (postgraduate years 1-4) between March 31 and June 22, 2025. Participants completed daily surveys during a seven-day inpatient schedule assessing workload and work experience domains, including environment, professional fulfillment, psychological safety, autonomy, and rounding experience, using validated instruments where available. Concurrently, EHR data captured chart review, documentation, orders, and secure messaging activity. Associations between survey and EHR data were assessed. Results: Among 37 eligible residents, 28 (76%) participated in the pilot capturing 166 shifts. Trainees spent 4.4 +/- 1.6 (mean +/- SD) minutes completing daily surveys and 8.6 +/- 2.3 minutes completing the final survey. Trainees reported working 11.6 +/- 1.0 hours/day and a median census of 9.0 (IQR 6.0-11.0). NASA-TLX score was 50.8 +/- 12.6. Positive shift ratings were associated with lower NASA-TLX scores and perceived rounding length. First-to-last EHR login duration was 15 +/- 2 hours/day, and EHR data showed 204 +/- 46 active minutes/day. Login duration correlated with self-reported hours (r=0.43, p<0.0001), and notes signed correlated with self-reported team (r=0.19, p=0.013) and personal census (r=0.34, p<0.0001). 
Conclusions: Integrating survey-based and EHR-derived workload measures provides multidimensional insight into trainee work. This novel approach supports scalable measurement and evidence-based work design interventions to improve trainee well-being, education, and clinical efficiency.
Jin, X.; Zhang, L. L.; Li, H.; Gong, W.
Despite the global prevalence of postpartum depression (PPD), current referral uptake rates are far from satisfactory. While some qualitative studies have investigated factors affecting PPD referrals, a gap in quantitative analysis remains. Addressing this, our study utilized a discrete choice experiment (DCE) to understand the procedural elements influencing PPD referral uptake among diagnosed women. The DCE was conducted via home visits by healthcare providers and a comprehensive mobile app questionnaire. We constructed seven distinct referral attributes to explore participants' preferences, analyzed using mixed logit models and latent class analysis. This analysis identified key determinants and revealed the heterogeneities in referral preferences. A total of 698 individuals completed the DCE questionnaire. All assessed attributes, except for Accompaniment (going to clinic with a family member), were important determinants of preference. Participants generally preferred referrals to psychiatric clinics, face-to-face consultations, lower costs, and shorter waiting times. Significantly, participants' personal and socio-demographic characteristics also played a critical role in their referral preferences. Latent class analysis categorized participants into four distinct groups based on their preferences, with treatment cost and waiting times being the most decisive factors. In conclusion, the preference for PPD referrals is predominantly driven by convenience and access to specialist care. To enhance referral uptake, developing flexible and personalized referral programs that cater to these preferences is crucial.
Thomas, C.; Kim, J. Y.; Hasan, A.; Kpodzro, S.; Cortes, J.; Day, B.; Jensen, S.; LHuillier, S.; Oden, M. O.; Zumbado Segura, S.; Maurer, E. W.; Tucker, S.; Robinson, S.; Garcia, B.; Muramalla, E.; Lu, S.; Chawla, N.; Patel, M.; Balu, S.; Sendak, M.
Safety net healthcare delivery organizations (SNOs) serve vulnerable populations but face persistent challenges in adopting new technologies, including AI. While systematic barriers to technology adoption in SNOs are well documented, little is known about how AI is implemented in these settings. This study explored real-world AI adoption in SNOs, focusing on identifying barriers encountered across the AI lifecycle and strategies used to overcome them. Five SNOs in the U.S. participated in a 12-month technical assistance program, the Practice Network, to implement AI tools of their choosing. Observed barriers and mitigation strategies were documented throughout program activities and, at the conclusion of the program, reviewed and refined with participants using a participatory research approach to ensure findings reflected lived experiences and organizational contexts. Key barriers emerged during the Integration and Lifecycle Management phases and included gaps in AI performance evaluation and impact assessments, communication with patients about AI use, foundational AI education, financial resources for purchasing and maintaining AI tools, and AI governance structures. Effective strategies for addressing these barriers were primarily supported through centralized expertise, structured guidance, and peer learning. These findings provide granular, actionable insights for SNO leaders, offering guidance for anticipating barriers and proactively planning mitigation strategies. By including SNO perspectives, the study also contributes to the broader health AI ecosystem and underscores the importance of participatory, collaborative approaches to support safe, effective, and ethical AI adoption in resource-constrained settings. Author Summary: Safety net organizations (SNOs) are healthcare systems that primarily serve low-income and underinsured patients. 
While interest in artificial intelligence (AI) in healthcare has grown rapidly, little is known about how these organizations experience AI adoption in practice. In this study, we partnered with five SNOs over a 12-month program to document the challenges they encountered when implementing AI tools and the strategies they used to address them. We worked closely with SNO staff throughout the process to ensure our findings reflected their lived experiences with AI implementation. We found that the most common challenges arose when organizations tried to integrate AI into daily operations and monitor and maintain those tools over time. Specific barriers included difficulty evaluating whether AI was performing as expected, limited guidance on communicating with patients about AI use, a lack of resources for staff training, limited financial resources, and the absence of formal governance structures. Successful strategies for overcoming these challenges drew on shared knowledge and structured support provided by the program, as well as learning from peer organizations. These findings offer practical guidance for SNO leaders planning or managing AI adoption, and contribute to a broader conversation about what is required to implement AI safely and effectively in healthcare settings that serve the most medically and socially vulnerable patients.
Zeng, A.; O'Hagan, E. T.; Trivedi, R.; Ford, B.; Perry, T.; Turnbull, S.; Sheahen, B.; Mulley, J.; Sedhom, M.; Choy, C.; Biasi, A.; Walters, S.; Miranda, J. J.; Chow, C. K.; Laranjo, L.
Background: Continuous adhesive patch electrocardiographic (ECG) wearables are increasingly prescribed. Patient experience with these devices can influence adherence, but research in this area is limited. This study aimed to explore the perceptions and experiences of patients receiving wearable cardiac monitoring technology as part of their routine care through the lens of treatment burden. Methods: This was a qualitative study with semi-structured phone interviews conducted between February and May 2024. We recruited participants from primary care and outpatient clinics using maximum variation sampling to ensure diversity in sex, ethnicity, and education levels. Interviews were audio-recorded, transcribed, and analysed using reflexive thematic analysis. Results: Sixteen participants (mean age 51 years, 63% female) were interviewed (average duration: 33 minutes). Three themes were developed: 1) "Experience using the device: Burden vs Ease of Use", which captured participants' perceptions of how easily they could integrate the device in their daily lives; 2) "Individual variability in responses to ECG self-monitoring" covered participants' emotional and cognitive response to knowing their heart rhythm was monitored; and 3) "The care process shapes patient experiences" reflected support preferences during the set-up and monitoring period and the uncertainty regarding timely clinical and device feedback. Conclusions: Patients valued cardiac wearables for facilitating diagnosis and felt reassured knowing they were clinically monitored. However, gaps in information provided to patients seemed to cause anxiety for some participants. These concerns could be mitigated through clearer clinician communication and patient education at the time of prescription.
Nayyar, C.; Xu, H. H.; Bates, A. T.; Conati, C.; Hilbers, D.; Avery, J.; Raman, S.; Fayaz-Bakhsh, A.; Nunez, J.-J.
Background: Artificial intelligence (AI) has rapidly garnered interest in healthcare, with research showing promise to improve quality, efficiency, and outcomes. Cancer care's multidisciplinary nature and high coordination demands are well positioned to benefit from AI. While attitudes in the uptake of evidence and toward the implementation of AI in medicine have been explored generally, literature remains scarce with specific regard to AI in cancer care. This study sought to understand how perspectives of both patients and professionals are essential for guiding responsible, effective implementation of evidence-based (EB) AI in cancer care. Methods: We conducted a workshop at the 2024 British Columbia (BC) Cancer Summit (Vancouver, Canada). Discussions addressed three guiding questions: concerns, benefits, and priorities for AI in cancer care. Responses from 48 workshop participants (patients and families, AI/computer science/cancer researchers, clinicians and allied health professionals, information technology professionals, healthcare administrators) underwent structured conceptualization by concept mapping, leveraging multidimensional scaling and hierarchical cluster and subcluster analysis to produce visual and quantitative maps of stakeholder priorities. Results: A total of 265 statements on perceived benefits, concerns, and priorities related to the implementation of AI in cancer care were generated from the workshop and underwent concept mapping. Two clusters were identified: Cluster 1 focused on "Challenges and Safeguards for AI Implementation," and Cluster 2 focused on "Clinical Benefits and Efficiency Gains." Subcluster analysis distinguished 8 thematic subclusters (4 per cluster). Both mean importance (P < .001) and feasibility (P < .001) ratings were significantly higher for Cluster 2. No differences were found between ratings by clinical and nonclinical professionals. 
Further go-zone analysis classified statements according to their relative superiority/inferiority in importance and feasibility compared to the overall average. Conclusions: Stakeholder ratings were higher for statements describing clinical benefits and efficiency gains than for those describing challenges and safeguards for AI implementation in cancer care. Concept mapping analysis distinguished between workflow-aligned AI applications, perceived as ready for implementation, and system-level governance requirements requiring longer-term investment. Present findings provide a structured, stakeholder-informed framework for prioritizing and sequencing AI implementation efforts in cancer care, constituting a practical blueprint to catalyze meaningful progress.
Usuzaki, T.; Matsunbo, E.; Inamori, R.
Despite the remarkable progress of artificial intelligence represented by large language models, how AI technologies can contribute to the construction of evidence in evidence-based medicine (EBM) remains an overlooked issue. We now need AI that is compatible with EBM. In this paper, we propose an example analysis that may contribute to this approach using a variable Vision Transformer.
Pendharkar, S.; Blades, K.; Yazji, B.; Ayas, N.; Owens, R.; Kaminska, M.; Mackenzie, C.; Gershon, A.; Ratycz, D.; Lischenko, V.; Fenton, M. E.; McBrien, K.; Povitz, M.; Kendzerska, T.
Purpose: To understand how the Philips PAP device recall affected patient experiences, clinical practice, and health system responses. Methods: From November 2022 to August 2023, we interviewed individuals with OSA, physicians, respiratory therapists and health system leaders. We also received emailed responses from Health Canada. Interviews explored participants' experiences with the recall announcement and communication, their own responses and perceptions of actions taken by others, the overall impact of the recall and suggestions for improving future recall processes. Interviews were analyzed using an inductive thematic approach. Results: We interviewed 47 participants (16 individuals with OSA, 10 physicians, 17 public or private respiratory therapists, five health system leaders). Themes were organized into four domains: recall communication, execution, participant experiences, and the policy and regulatory context. Participants were confused due to inadequate information from Philips throughout the process. The burden of notifying patients and tracing devices mostly fell to healthcare providers and vendors, while replacement efforts were disorganized and frustrating. Individuals with OSA experienced emotional distress over therapy decisions and difficulties navigating the recall. Healthcare providers described moral distress from being unable to support patients adequately, and vendors faced additional logistical and financial strain. While regulatory authorities reported that Philips followed standard procedures, participants expressed a loss of trust in both the manufacturer and oversight systems. Conclusions: Interviews revealed that poor communication and execution of the Philips recall caused confusion, frustration and significant emotional and financial burden. Collaborative, context-specific strategies are required to improve future recalls.
Matimo, C. R.; Kacholi, G.; Mollel, H. A.
Background: Digital health plays an indispensable role in facilitating data analysis and use for enhancing healthcare delivery across health settings. However, there is scant information on the extent to which digital health influences the improvement of primary health services delivery through data use. This study examined the determinants that influence the use of digital health to improve health service delivery in council hospitals in Tanzania. Methods: A cross-sectional design was employed in six regions, involving 12 council hospitals. We used a self-administered questionnaire to collect data from 203 members of hospital quality improvement teams. Descriptive analysis was used to determine the frequency, proportion, and mean of responses, while bootstrapping analysis was conducted to test the statistically significant influence of digital health factors on data use for improving health service delivery. Results: Results show moderate agreement on data compatibility for planning and decision-making, with 40.4% of respondents agreeing it supports ordering commodities, 43.8% for staff allocation, and 38.4% for planning. However, dissatisfaction was higher for user-friendliness (47.8%), reliability (up to 65.5%), and usefulness (up to 63.5%). Overall, 50.2% (M=2.74±0.87) disagreed that digital systems effectively support data use. Structural model analysis confirmed a significant positive influence of usefulness (β=0.199, p<0.001) and access to quality data (β=0.729, p<0.001) on data use, which strongly impacted service delivery (β=0.593, p<0.001), despite some factors showing no direct influence. Conclusion: The study finds that current digital health initiatives only modestly improve the user-friendliness, reliability, and usefulness of data systems, partly due to fragmented, non-interoperable platforms that burden data management. 
However, compatibility, usability, reliability, and usefulness of digital tools significantly enhance access to quality data and data-driven decisions. The study recommends strengthening and integrating existing systems and providing continuous digital health training to institutionalize data-informed decision-making.
Ismail, A. J.; Moeti, L.; Darko, D. M.; Walker, S.; Salek, S.
Background: Regulatory inconsistency across African countries contributes to duplicative scientific assessments, prolonged approval timelines, and delayed access to essential medical products. To inform the operationalisation of the African Medicines Agency (AMA), the African Medicines Regulatory Harmonisation (AMRH) programme implemented Africa's first continental pilot study for the scientific evaluation and listing of human medicinal products. This study evaluates the pilot's procedural performance and examines how continental scientific opinions were translated into national regulatory decisions through reliance mechanisms. Methods and Findings: A mixed-methods programme evaluation was conducted using regulatory datasets generated during the pilot study. Quantitative data included assessment timelines, GMP inspection outcomes and national post-listing regulatory actions. Retrospective qualitative thematic analysis was applied to governance documents and National Regulatory Authority (NRA) feedback to identify legal, institutional and procedural determinants influencing uptake. Of 64 expressions of interest, 24 products progressed to full evaluation and 12 received positive continental scientific opinions. Ten met the predefined performance target of ≤210 working days. Twenty-four GMP inspections identified no critical deficiencies and aligned with global regulatory benchmarks. National uptake demonstrated active reliance: full reliance (continental opinion as primary basis for national approval) for 7 products (58%); sequential reliance (continental assessment supplemented with targeted national queries) for 3 products (25%); and supplemented national review (separate national assessment undertaken) for 2 products (17%). Products with broader market strategies achieved registration in up to 23 African countries within a median of 77 working days post-listing. 
Variability in uptake reflected national legal authority, administrative requirements, and applicant submission strategies. Conclusions: The pilot study demonstrates the feasibility of a continent-wide regulatory assessment mechanism capable of producing trusted scientific outputs and enabling reliance-based national decision-making in Africa. While reliance was widely applied, heterogeneity in national procedures and administrative sequencing affected time to national registration. Findings provide empirical evidence to inform the AMA scale-up, highlighting the need for harmonised reliance pathways, streamlined administrative processes, and coordinated digital regulatory infrastructure.
Claus, L.; McNamara, M.; Oser, C.; Fogle, C.; Canine, B.
Cardiovascular disease (CVD) remains the leading cause of mortality in the United States, despite being largely preventable through effective management of risk factors. This study evaluates the impact of Phase II cardiac rehabilitation (CR) on functional capacity and quality of life, using data from the Montana Outcomes Project Cardiac Rehabilitation Registry. Functional capacity improvements were assessed via the six-minute walk test (6MWT) and Dartmouth COOP questionnaire, with statistical analyses exploring the influence of CR session attendance, demographic factors, and referring diagnoses. Results demonstrated significant gains in 6MWT, with a mean improvement of 330.73 feet (p < .0001), and quality of life scores across all subgroups. A dose-response relationship was observed, indicating greater improvements with increased CR sessions (p < .0001), though diminishing returns were observed beyond 24-35 visits. Demographic factors and complex conditions influenced outcomes, underscoring the need for tailored strategies to enhance CR access and effectiveness. These findings highlight the critical role of CR in improving patient outcomes and emphasize the importance of addressing barriers to participation in underserved populations.
Agumba, J.; Erick, S.; Pembere, A.; Nyongesa, J.
Objectives: To develop and evaluate a deployable deep learning system with Gradient-weighted Class Activation Mapping (Grad-CAM) for tuberculosis screening from chest radiographs and to assess its classification performance and explainability across desktop and mobile deployment platforms. Materials and methods: This study used publicly available chest X-ray datasets containing Normal and Tuberculosis images. A DenseNet121-based transfer learning model was trained using stratified training, validation, and test splits with data augmentation and class weighting. Model performance was evaluated using accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) curve, and area under the ROC curve (AUC). Grad-CAM was used to visualize regions influencing model predictions. The trained model was converted to TensorFlow Lite and deployed in both a Windows desktop application and a Flutter-based mobile application for offline inference and visualization. Results: The model demonstrated strong classification performance on the independent test dataset, with high accuracy and AUC values indicating effective discrimination between Normal and Tuberculosis cases. Grad-CAM visualizations showed that the model focused primarily on anatomically relevant lung regions, particularly the upper and mid-lung fields in Tuberculosis cases. Deployment testing confirmed consistent prediction outputs and Grad-CAM visualizations across both Windows and mobile platforms. Conclusion: The proposed deployable deep learning system with Grad-CAM provides accurate and interpretable tuberculosis screening from chest radiographs and demonstrates feasibility for offline mobile and desktop deployment. This approach has potential as an artificial intelligence-assisted screening and decision support tool in radiology, particularly in resource-limited and remote healthcare settings.
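The core Grad-CAM computation the abstract relies on is compact: channel weights are obtained by global-average-pooling the gradients of the class score over each convolutional feature map, the feature maps are combined with those weights, and the result is passed through a ReLU. The NumPy sketch below is an illustrative reimplementation under those standard definitions, not the authors' code; the function name `grad_cam` and the (H, W, K) array layout are assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: (H, W, K) feature maps from a conv layer
    gradients:   (H, W, K) gradients of the target class score
                 with respect to those feature maps
    returns:     (H, W) heatmap scaled to [0, 1]
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(0, 1))                    # (K,)
    # Weighted combination of feature maps, then ReLU.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)
    # Normalise to [0, 1] for overlay visualisation.
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In a real pipeline the activations and gradients would come from the final convolutional block of the DenseNet121 model, and the heatmap would be upsampled to the radiograph's resolution before overlay.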
Vollam, S.; Roman, C.; King, E.; Tarassenko, L.
A Wearable Monitoring System (WMS), comprising a chest patch, wrist-worn pulse oximeter, and arm-worn blood pressure device, was developed in preparation for a pilot Randomised Controlled Trial (RCT) on a UK surgical ward. The system was designed to support continuous physiological monitoring and early detection of deterioration. An initial prototype user interface was developed by the research team based on prior clinical experience and engineering knowledge. To ensure suitability for clinical practice, iterative user-centred refinement was undertaken through a series of clinician focus groups and wearability assessments. Six focus groups were conducted between November 2019 and May 2021 involving multidisciplinary healthcare professionals. Feedback from these sessions informed successive interface and system modifications. System development spanned the COVID-19 pandemic, during which the WMS was rapidly adapted and deployed to support clinical care on isolation wards. Feedback obtained during this period was incorporated into later versions of the system and provided a unique opportunity to examine changes in clinician priorities under pandemic conditions. Clinicians consistently prioritised alert visibility, alarm fatigue mitigation, parameter flexibility, and centralised monitoring. Notably, preferences regarding alert modality and access mechanisms evolved over time: early enthusiasm for mobile or smartphone-type devices shifted towards a preference for fixed, ward-based displays and audible alerts at the nurses' station following pandemic deployment. Building on earlier testing in healthy volunteers, wearability was assessed with a validated questionnaire completed by 169 patient participants during the RCT. The chest patch and pulse oximeter demonstrated high tolerability, whereas the blood pressure cuff showed poor wearability and was removed from the final system.
These findings demonstrate the importance of iterative, clinician-led design for wearable monitoring systems and highlight how extreme clinical contexts such as the COVID-19 pandemic can significantly reshape perceived requirements for safety-critical monitoring technologies.
Sumner, S. F.; Sakita, F. M.; Haukila, K. F.; Wanda, L.; Kweka, G. L.; Mlangi, J. J.; Shayo, P.; Tarimo, T. G.; Khanna, S.; Wang, C.; Pyne, A.; Manavalan, P.; Thielman, N. M.; Bettger, J. P.; Hertz, J. T.
Acute myocardial infarction (AMI) is an increasing cause of morbidity and mortality in Sub-Saharan Africa (SSA) but is often underdiagnosed and undertreated. To address this gap, the Multicomponent Intervention to Improve Myocardial Infarction Care (MIMIC) was developed and implemented in the emergency department (ED) of a regional referral center in northern Tanzania. We conducted in-depth interviews with 20 key stakeholders (physicians, nurses, administrators, and patients) who participated in MIMIC during the first year of implementation. Purposive sampling was used to recruit a broad range of participants. Interviews were guided by a semi-structured interview guide informed by the Theoretical Framework of Acceptability (TFA). Interview transcripts were thematically analyzed by a team of coders using an inductive, grounded theory approach guided by the seven TFA domains. Nineteen major themes emerged across all TFA domains. Overall, participants described MIMIC as highly acceptable, minimally burdensome, and well-aligned with professional and ethical values. Perceived effectiveness was most emphasized, with staff citing improvements in AMI recognition, ECG and troponin testing, and use of evidence-based therapies. All components were highlighted as effective and easily integrated into existing workflows. Patients valued the educational pamphlet for improving knowledge and self-efficacy, though staff expressed concerns about distributing it during acute care, contributing to inconsistent delivery. Champions were viewed as key in promoting adherence and sustaining implementation of the intervention. MIMIC was widely acceptable in all seven TFA domains among ED providers and patients, with perceived effectiveness driving positive attitudes across stakeholder groups. Use of a co-design approach in MIMIC development likely contributed to high intervention acceptability. Patient education strategies may require adaptation to improve fidelity. 
These findings suggest that continued implementation and future adaptation of MIMIC may be feasible.
Kizilaslan, B.; Mehlum, L.
Purpose: Suicide and self-harm are major public health concerns characterized by substantial clinical and psychosocial heterogeneity. While latent class analysis has been used to identify subgroups of people with suicidal behavior, the extent to which such population-level phenotyping complements explainable artificial intelligence-based classification models remains unclear. Methods: We applied latent class analysis to a cross-sectional, publicly available dataset of 1000 individuals presenting with self-harm and suicide-related behaviors at Colombo South Teaching Hospital, Kalubowila, Sri Lanka. Sociodemographic, psychosocial, and clinical variables were used to identify latent subgroups. Class characteristics and suicide prevalence were examined and compared with variable importance patterns reported in a previously published explainable artificial intelligence (XAI)-based suicide classification study using the same dataset. Results: Four latent classes were identified. Two classes exhibited very high suicide prevalence (91.2% [95% CI: 87.7-93.8] and 99.0% [95% CI: 96.4-99.7]), whereas two classes showed low prevalence (<1%). The two high-prevalence classes differed markedly in lifetime psychiatric hospitalization history, with one class showing a 100% prevalence of prior hospitalization and the other substantially lower hospitalization rates. These patterns partially aligned with, and extended beyond, variable importance findings from the XAI-based model. Conclusion: Latent class analysis identified distinct subgroups with substantially different suicide prevalence and clinical profiles, underscoring the heterogeneity of individuals presenting with self-harm. Comparison with XAI-based suicide classification model findings suggests that unsupervised phenotyping and supervised classification provide complementary perspectives, offering population-level context that may enhance the interpretability of suicide assessment frameworks.
Keywords: suicide; self-harm; latent class analysis; explainable artificial intelligence; machine learning
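Latent class analysis of the kind used in this study fits a finite mixture over categorical indicators by expectation-maximization: the E-step assigns each individual a responsibility for each class, and the M-step re-estimates class priors and per-class item probabilities. The sketch below is a minimal EM for binary indicators only, written to illustrate the mechanics; it is not the authors' model, and the function name `fit_lca` and its defaults are assumptions.

```python
import numpy as np

def fit_lca(X, n_classes=2, n_iter=200, seed=0):
    """Fit a latent class model to binary data X of shape (n, j) via EM.

    Returns (class_priors, item_probs, responsibilities)."""
    rng = np.random.default_rng(seed)
    n, j = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class priors
    theta = rng.uniform(0.25, 0.75, size=(n_classes, j))  # item probs
    for _ in range(n_iter):
        # E-step: per-class log-likelihood of each observation.
        log_lik = (X @ np.log(theta).T
                   + (1 - X) @ np.log(1 - theta).T
                   + np.log(pi))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update priors and item probabilities.
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp
```

In practice the number of classes (four in this study) is chosen by comparing fit statistics such as BIC across candidate models, and mixed categorical indicators require per-category probabilities rather than the single Bernoulli parameter used here.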
Ware, O. D.
Synthetic cathinones, colloquially called bath salts or flakka, are a group of psychoactive substances used recreationally, including mephedrone and eutylone. Studies examining the prevalence of bath salt use among select samples in the United States have found that approximately 1% each of nightclub attendees in New York, high school seniors, and college students have used them. The purpose of this study was to examine the national prevalence of lifetime and past-12-month use of bath salts among a nationally representative sample of persons in the United States from 2021 to 2023. This study also examined nationwide poison center data to identify the number of poisonings from 2021 to 2023 in which bath salt use was intentional, rather than an adulterant in another illicitly obtained recreational substance. This study identified the prevalence of lifetime bath salt use among a nationally representative sample of persons 12 years and older in the U.S. to be 0.2% (n = 670,611) in 2021, 0.3% (n = 838,941) in 2022, and 0.3% (n = 836,128) in 2023. The national prevalence of past-12-month bath salt use was 0.0% (n = 111,039) in 2021, 0.1% (n = 167,815) in 2022, and 0.1% (n = 152,276) in 2023. From 2021 to 2023, there were 148 cases in which bath salt use was intentional and involved in a reported poisoning to one of the 55 poison centers in the U.S. Future studies are needed to examine risk factors associated with bath salt-related poisonings.